Quadratic programming (QP) is a special type of mathematical optimization problem. It is the problem of optimizing (minimizing or maximizing) a quadratic function of several variables subject to linear constraints on these variables.
The quadratic programming problem can be formulated as:[1]
Assume x belongs to \mathbb{R}^n. Both x and c are column vectors with n elements (n×1 matrices), and Q is a symmetric n×n matrix.
Minimize (with respect to x)
    f(x) = \tfrac{1}{2} x^T Q x + c^T x
Subject to one or more constraints of the form:
    A x \le b    (inequality constraint)
    E x = d      (equality constraint)
where x^T indicates the vector transpose of x. The notation A x \le b means that every entry of the vector A x is less than or equal to the corresponding entry of the vector b.
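As a concrete illustration (the numbers are chosen purely for demonstration), take n = 2 with
    Q = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \quad c = \begin{bmatrix} -2 \\ -5 \end{bmatrix}, \quad A = \begin{bmatrix} 1 & 1 \\ -1 & 0 \\ 0 & -1 \end{bmatrix}, \quad b = \begin{bmatrix} 3 \\ 0 \\ 0 \end{bmatrix},
so the problem reads: minimize x_1^2 + x_2^2 - 2 x_1 - 5 x_2 subject to x_1 + x_2 \le 3, x_1 \ge 0, x_2 \ge 0. The unconstrained minimizer (1, 2.5) violates x_1 + x_2 \le 3, so that constraint is active at the optimum, which works out to x^* = (0.75, 2.25).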
If the matrix Q is positive semidefinite, then f(x) is a convex function: in this case the quadratic program has a global minimizer if there exists some feasible vector x (satisfying the constraints) and if f(x) is bounded below on the feasible region. If the matrix Q is positive definite and the problem has a feasible solution, then the global minimizer is unique.
If Q is zero, then the problem becomes a linear program.
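A minimal sketch of how such a convex QP might be handed to a solver, here CVXOPT (listed in the solver table below), assuming NumPy and CVXOPT are available; the data are the illustrative numbers from the small example above, and CVXOPT's qp call expects the problem in the form minimize (1/2) x^T P x + q^T x subject to G x <= h:

```python
import numpy as np
from cvxopt import matrix, solvers

# Illustrative data only (same small instance as in the text above):
# minimize x1^2 + x2^2 - 2*x1 - 5*x2  subject to  x1 + x2 <= 3, x1 >= 0, x2 >= 0
P = matrix(np.array([[2.0, 0.0],
                     [0.0, 2.0]]))          # the matrix Q (CVXOPT names it P)
q = matrix(np.array([[-2.0], [-5.0]]))      # the vector c
G = matrix(np.array([[ 1.0,  1.0],
                     [-1.0,  0.0],
                     [ 0.0, -1.0]]))        # rows encode x1+x2 <= 3, -x1 <= 0, -x2 <= 0
h = matrix(np.array([[3.0], [0.0], [0.0]]))

solvers.options['show_progress'] = False    # suppress iteration log
sol = solvers.qp(P, q, G, h)
print(np.array(sol['x']).ravel())           # approximately [0.75, 2.25]
```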
A related programming problem, quadratically constrained quadratic programming, can be posed by adding quadratic constraints on the variables.
For general problems a variety of methods are commonly used, including interior point, active set, augmented Lagrangian, conjugate gradient, and gradient projection methods, as well as extensions of the simplex algorithm.
Convex quadratic programming is a special case of the more general field of convex optimization.
Quadratic programming is particularly simple when there are only equality constraints, say
    minimize \tfrac{1}{2} x^T Q x + c^T x   subject to   E x = d;
specifically, the solution can then be found by solving a linear system. By using Lagrange multipliers and seeking the extremum of the Lagrangian, it may be readily shown that the solution to the equality constrained problem is given by the linear system:
    \begin{bmatrix} Q & E^T \\ E & 0 \end{bmatrix} \begin{bmatrix} x \\ \lambda \end{bmatrix} = \begin{bmatrix} -c \\ d \end{bmatrix}
where \lambda is a set of Lagrange multipliers which come out of the solution alongside x.
The easiest means of approaching this system is direct solution (for example, LU factorization), which for small problems is very practical. For large problems, the system poses some unusual difficulties, most notably that the block matrix above is never positive definite (even if Q is), making it potentially very difficult to find a good numeric approach, and there are many approaches to choose from depending on the problem.[5]
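A minimal sketch of the direct approach, assuming NumPy and SciPy are available; the matrices below are made-up illustrative data (one equality constraint, positive definite Q), and the code simply assembles the block system above and solves it with an LU factorization:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical small instance: minimize 1/2 x^T Q x + c^T x  subject to  E x = d
Q = np.array([[4.0, 1.0],
              [1.0, 2.0]])
c = np.array([1.0, 1.0])
E = np.array([[1.0, 1.0]])       # single equality constraint x1 + x2 = 1
d = np.array([1.0])

n, m = Q.shape[0], E.shape[0]
K = np.block([[Q, E.T],
              [E, np.zeros((m, m))]])   # indefinite block matrix [[Q, E^T], [E, 0]]
rhs = np.concatenate([-c, d])

lu, piv = lu_factor(K)                  # direct LU factorization
sol = lu_solve((lu, piv), rhs)
x, lam = sol[:n], sol[n:]               # x ≈ [0.25, 0.75] for this made-up data
print(x, lam)
```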
If the constraints don't couple the variables too tightly, a relatively simple attack is to change the variables so that constraints are unconditionally satisfied. For example, suppose d = 0 (generalizing to nonzero d is straightforward). Looking at the constraint equations:
    E x = 0
introduce a new variable y defined by
    Z y = x
where y has dimension of x minus the number of constraints. Then
    E Z y = 0
and if Z is chosen so that E Z = 0 the constraint equation will be always satisfied. Finding such Z entails finding the null space of E, which is more or less simple depending on the structure of E. Substituting into the quadratic form gives an unconstrained minimization problem:
    \tfrac{1}{2} x^T Q x + c^T x \quad \Rightarrow \quad \tfrac{1}{2} y^T Z^T Q Z y + (Z^T c)^T y
the solution of which is given by:
    Z^T Q Z y = -Z^T c
Under certain conditions on Z, the reduced matrix Z^T Q Z will be positive definite. It is possible to write a variation on the conjugate gradient method which avoids the explicit calculation of Z.[6]
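A sketch of this variable-elimination idea on a hypothetical instance with d = 0, assuming NumPy and SciPy; scipy.linalg.null_space is used here to obtain a matrix Z with E Z = 0 (any null-space basis would do), after which the reduced system Z^T Q Z y = -Z^T c is solved directly:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical instance of the equality-constrained QP with d = 0:
# minimize 1/2 x^T Q x + c^T x  subject to  E x = 0
Q = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
c = np.array([1.0, -1.0, 0.5])
E = np.array([[1.0, 1.0, 1.0]])   # one constraint, so y has dimension 3 - 1 = 2

Z = null_space(E)                 # columns of Z span the null space of E, so E @ Z = 0
Qr = Z.T @ Q @ Z                  # reduced matrix Z^T Q Z
cr = Z.T @ c                      # reduced linear term Z^T c

y = np.linalg.solve(Qr, -cr)      # solve Z^T Q Z y = -Z^T c
x = Z @ y                         # recover x = Z y, which satisfies E x = 0 by construction
print(x, E @ x)                   # E @ x is numerically ~0
```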
The Lagrangian dual of a QP is also a QP. To see this, let us focus on the case where c = 0 and Q is positive definite. We write the Lagrangian function as
    L(x, \lambda) = \tfrac{1}{2} x^T Q x + \lambda^T (A x - b).
Defining the (Lagrangian) dual function g(\lambda) as g(\lambda) = \inf_x L(x, \lambda), we find an infimum of L by using \nabla_x L(x, \lambda) = 0, which gives x^* = -Q^{-1} A^T \lambda and
    g(\lambda) = -\tfrac{1}{2} \lambda^T A Q^{-1} A^T \lambda - \lambda^T b,
hence the Lagrangian dual of the QP is
    maximize:   -\tfrac{1}{2} \lambda^T A Q^{-1} A^T \lambda - \lambda^T b
    subject to: \lambda \ge 0.
Besides the Lagrangian duality theory, there are other duality pairings (e.g. Wolfe, etc.).
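This duality relationship can be checked numerically on a small assumed instance with c = 0, using NumPy and SciPy; the sketch below takes the primal "minimize x_1^2 + x_2^2 subject to x_1 + x_2 >= 2" (written as -x_1 - x_2 <= -2, with optimum 2 at (1, 1)), maximizes the dual function over \lambda >= 0 by minimizing its negative with scipy.optimize.minimize, and recovers the primal point x = -Q^{-1} A^T \lambda:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance with c = 0 and Q positive definite:
# primal:  minimize 1/2 x^T Q x   subject to  A x <= b
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
A = np.array([[-1.0, -1.0]])     # encodes x1 + x2 >= 2 as -x1 - x2 <= -2
b = np.array([-2.0])
Qinv = np.linalg.inv(Q)

# dual:  maximize  -1/2 lam^T A Q^{-1} A^T lam - b^T lam   subject to  lam >= 0
def neg_dual(lam):
    return 0.5 * lam @ A @ Qinv @ A.T @ lam + b @ lam

res = minimize(neg_dual, x0=np.zeros(1), bounds=[(0, None)])
lam = res.x                      # dual maximizer, approximately [2.0]
x = -Qinv @ A.T @ lam            # primal point recovered from the dual, approximately [1.0, 1.0]
print(lam, x, -res.fun)          # dual optimum ≈ 2, equal to the primal optimum (strong duality)
```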
For positive definite Q, the ellipsoid method solves the problem in polynomial time.[7] If, on the other hand, Q is indefinite, then the problem is NP-hard.[8] In fact, even if Q has only one negative eigenvalue, the problem is NP-hard.[9]
Free open-source permissive licenses:
Name | License | Brief info |
---|---|---|
OpenOpt | BSD | Universal cross-platform numerical optimization framework; see its QP page and the other problem classes it covers. Uses NumPy arrays and SciPy sparse matrices. |
qp-numpy | BSD | Built around the qld code written by K. Schittkowski of the University of Bayreuth, Germany |
Free open-source copyleft (reciprocal) licenses:
Name | License | Brief info |
---|---|---|
CVXOPT | GPL | General-purpose convex optimization solver written in Python |
OOQP | | |
Octave | GPL | Solver for quadratic programming problems included in the general-purpose GNU Octave numerical environment |
Other Free open-source licenses:
Name | License | Brief info |
---|---|---|
CGAL | QPL | An exact solver QP_solver, part of the Computational Geometry Algorithms Library (CGAL) |
Proprietary licenses:
Name | Brief info |
---|---|
AIMMS | |
AMPL | |
CPLEX | Popular solver with an API for several programming languages |
EXCEL Solver Function | |
FinMath | A .NET numerical library containing an interior-point primal-dual solver for convex quadratic programming problems. |
GAMS | |
Gurobi | Solver with parallel algorithms for large-scale linear programs, quadratic programs and mixed-integer programs. Free for academic use. |
Lingo | |
MATLAB | A general-purpose, matrix-oriented programming language for numerical computing. Quadratic programming in MATLAB requires the Optimization Toolbox in addition to the base MATLAB product. |
Mathematica | A general-purpose programming language for mathematics, including symbolic and numerical capabilities. |
MOSEK | A solver for large-scale optimization with APIs for several languages (C++, Java, .NET, MATLAB and Python) |
OptimJ | Free Java-based Modeling Language for Optimization supporting multiple target solvers and available as an Eclipse plugin.[10][11] |
Solver Foundation | A .NET platform for modeling, scheduling, and optimization |
TOMLAB | Supports global optimization, integer programming, all types of least squares, linear, quadratic and unconstrained programming for MATLAB. TOMLAB supports solvers like Gurobi, CPLEX, SNOPT and KNITRO. |
Xpress | |